doc: add AI/LLM-assisted contributions policy #14223

Merged
RonnyPfannschmidt merged 4 commits into pytest-dev:main from RonnyPfannschmidt:contributions-agents
Mar 4, 2026

Conversation

@RonnyPfannschmidt
Member

Summary

  • Add an AI/LLM-Assisted Contributions Policy section to CONTRIBUTING.rst
  • Require mandatory disclosure of AI tool usage in all pull requests
  • State that purely agentic (unsupervised AI) contributions will result in a ban
  • Failure to disclose or abusive AI-generated PRs will result in a public ban
  • Include a Context subsection explaining the rationale behind the policy

Motivation

Unsupervised agentic tools have led to a rise in low-quality contributions that waste maintainer time. This policy makes expectations explicit and protects reviewers.

Made with Cursor

Add a policy section to CONTRIBUTING.rst requiring disclosure of AI tool
usage, rejecting purely agentic contributions, and outlining consequences
(public ban) for non-disclosure or abusive AI-generated PRs.

Includes context section explaining the rationale — unsupervised agentic
tools waste maintainer time and demonstrate disrespect for human reviewers.

Co-authored-by: Cursor AI <ai@cursor.sh>
Co-authored-by: Anthropic Claude <claude@anthropic.com>
Copilot AI review requested due to automatic review settings February 21, 2026 08:51
@RonnyPfannschmidt RonnyPfannschmidt added the "skip news" label (used on PRs to opt out of the changelog requirement) Feb 21, 2026

Copilot AI left a comment

Pull request overview

This pull request adds an AI/LLM-Assisted Contributions Policy to the pytest project's CONTRIBUTING.rst file. The policy aims to address concerns about low-quality, AI-generated contributions by establishing clear disclosure requirements and boundaries for AI tool usage. The change includes both the policy itself and an extensive Context subsection explaining the rationale behind it.

Changes:

  • Adds a new "AI/LLM-Assisted Contributions Policy" section with mandatory disclosure requirements for AI tool usage
  • Establishes that purely agentic (unsupervised AI) contributions will result in bans
  • Includes a Context subsection explaining the motivation and reasoning behind the policy


CONTRIBUTING.rst Outdated
With the advent of unsupervised agentic tools like OpenClaw, there has been a rise in low-quality contributions
where an agent goes on a rampage of low-quality pull requests.
Oftentimes this looks like a human beginner with fresh agent access trying to learn,
but in reality it's just an unsupervised agentic tool wasting human time for bad-faith contributions.

Copilot AI Feb 21, 2026

The phrase "bad-faith contributions" may be too strong, as it implies intentional malice. Many contributors using unsupervised AI tools may not realize the impact rather than acting in bad faith. Consider rephrasing to "low-quality or inappropriate contributions" to focus on the outcome rather than attributing intent.

Suggested change
but in reality it's just an unsupervised agentic tool wasting human time for bad-faith contributions.
but in reality it's just an unsupervised agentic tool wasting human time by generating low-quality or inappropriate contributions.

CONTRIBUTING.rst Outdated
Context
~~~~~~~

With the advent of unsupervised agentic tools like OpenClaw, there has been a rise in low-quality contributions

Copilot AI Feb 21, 2026

The reference to "OpenClaw" appears to be a specific tool name. If this is meant to be a general term or if the tool name is incorrect, this should be clarified. If it's a real tool, consider whether naming specific tools in the Context subsection could become outdated or be perceived as targeting specific products.

Suggested change
With the advent of unsupervised agentic tools like OpenClaw, there has been a rise in low-quality contributions
With the advent of unsupervised agentic tools, there has been a rise in low-quality contributions

Member

Disagree, using OpenClaw as an example is useful.

CONTRIBUTING.rst Outdated
The promise of AI was to free up human time to focus on more important things like family and friends.

Fully agentic contributions turn that around — there is no human learning or growing behind those bots,
only soulless, semi-functional prompt adherence.
Member

Also, there's an asymmetry at play. Someone is prioritizing what we review without making an equivalent time or money investment. When contributors think an issue is important enough to work on and do so, they effectively give their time to influence the prioritization of the project. Most GitHub Sponsors pay tiers include a 'prioritize this issue' perk, generally an expensive tier intended for companies. These agentic contributions expect to prioritize what maintainers work on at near-zero cost to the sender.

@RonnyPfannschmidt
Member Author

I'm thinking of moving the context part to a blog post, as the language is intentionally a bit more hostile and may be a bit misplaced in the docs, which are intentionally more welcoming.

@RonnyPfannschmidt RonnyPfannschmidt marked this pull request as draft February 21, 2026 10:43
@nicoddemus
Member

We should also update https://github.com/pytest-dev/pytest/blob/main/.github/PULL_REQUEST_TEMPLATE.md with a clear note about AI disclosure and forbidding unsupervised agents.

@nicoddemus
Member

I'm thinking of moving the context part to a blog post, as the language is intentionally a bit more hostile and may be a bit misplaced in the docs, which are intentionally more welcoming.

I left a comment on that section: I think the suggestion made by Copilot is on point and should be in the same document, as I think context is important to convey our intentions clearly.

Address PR feedback:
- "evil" → "wasteful" (nicoddemus)
- "bad laziness" → "laziness" (nicoddemus)
- "With a human" → "Were the contribution made by an actual human" (nicoddemus)
- Em dashes → commas for document consistency
- Break long lines for source readability
- Incorporate asymmetry argument from Pierre-Sassoulas: agentic PRs
  prioritize reviewer attention at near-zero cost to the sender

Add AI/LLM disclosure checkboxes to the PR template with a link
to the full policy in CONTRIBUTING.rst.

Co-authored-by: Cursor AI <ai@cursor.sh>
Co-authored-by: Anthropic Claude <claude@anthropic.com>
@RonnyPfannschmidt
Member Author

I agree, my personal frustrations need to be separated from this text, which targets contributors; personal rants have a different place. The amount of AI slop I see these days creates the strange desire to climb up a mountain and scream at the night sky.

@nicoddemus
Member

I think this looks good, thanks @RonnyPfannschmidt!

@RonnyPfannschmidt RonnyPfannschmidt marked this pull request as ready for review February 21, 2026 21:45
Member

@Pierre-Sassoulas Pierre-Sassoulas left a comment

Looks great, just a small suggestion.

CONTRIBUTING.rst Outdated
Comment on lines +216 to +217
but in reality it's just an unsupervised agentic tool wasting human time
for bad-faith contributions.
Member

Suggested change
but in reality it's just an unsupervised agentic tool wasting human time
for bad-faith contributions.
but in reality it's just an unsupervised agentic tool wasting maintainer time.

Let's assume good faith? (I suppose the intent is still to help, although I have already encountered AI researchers who were just trying out their creation in the wild using my time; those were not deterred by common decency and are not going to be deterred by CONTRIBUTING.rst.)

@webknjaz
Member

cc @sirosen ^

Member

@webknjaz webknjaz left a comment

I shared this on Discord some time ago but in case not everyone saw, here's some more discussions/materials/tools:

**If you used AI/LLM tools** (e.g. GitHub Copilot, ChatGPT, Claude, or similar) to help with this PR, you **must** disclose it below. State which tools were used and to what extent. Purely agentic contributions are not accepted. See our [AI/LLM-Assisted Contributions Policy](https://github.com/pytest-dev/pytest/blob/main/CONTRIBUTING.rst#aillm-assisted-contributions-policy).

- [ ] This PR was made **without** AI/LLM assistance.
- [ ] This PR used AI/LLM assistance (describe tools and extent below).
Member

It'd be useful to get the prompts too.

Member Author

Based on how I observe agent tool usage and refinement, that would be a mess.

Member

The original prompt would indeed be very useful to have, often more useful than the result, TBH. And it would show the actual amount of effort that went into it, which would be a great deterrent against low-effort AI-driven fixes.


This is not something you can rightfully expect contributors to record and does not match with the wide variety of ways people use modern tooling. It also conveys zero information.

These checkboxes can only be used to discriminate against contributions based on non-technical merits. Meaning people will naturally self-correct towards not disclosing because of the discriminatory attitude the existence of this text implies.

I say this out of compassion. This will alienate contributors by trying to claim that how a change was made and by whom is more important than the substance.

Focus on the respectful interactions, outcomes, and code quality needs. Knowing what text editor or IDE someone used is 100% irrelevant.

See the first paragraph from https://devguide.python.org/getting-started/generative-ai/ for example.

(From a geeky agentic model use learning perspective: While I enjoy seeing prompts and sessions along with the exact specific details of which model version was used via what, that is an entirely unreasonable ask of people. It is not something that belongs in any project's policy.)

Member

@nicoddemus nicoddemus Feb 22, 2026

This is not something you can rightfully expect contributors to record and does not match with the wide variety of ways people use modern tooling. It also conveys zero information.

I kind of agree with this; I almost never get something done using an AI coding tool with just one prompt; often I end up using several. Are contributors expected to post the entire conversation in their PR description? This is not practical, and probably brings almost no value that I can see, at least.

In the future, it is very likely that every contribution will have some AI involved, ranging from basic AI features (like smarter autocomplete) to full-blown coding agents. Requiring users to disclose this fact does not seem very useful, TBH.


The intent of this policy, from my POV, is to stop/prevent fully automated contributions, where at no point a human was involved prior to the PR being submitted. I have no problem with people using AI tools to contribute, and I don't think we need to require them to disclose using those tools, both because it does not bring much value and because it is not practical.

Member Author

Currently there's no good way to track prompts and intents.

Sometimes I literally prompt by pasting an issue link into the chat; sometimes I write down 5-10 paragraphs of detail and run planning rounds with Q&A by the agent. On top of that, I sometimes do rebasing and squashing with the agent. Currently I'm not aware of a way to properly trace all that in a sensible manner.

Also, I sometimes swear at the LLMs when they do something extra messy, and like with a good old hammer or saw or screwdriver, I like that the tool keeps no memory of my language there.

Member

Indeed!

All the more reason that disclosing an LLM was used is not useful. The important thing is that there's a human reviewing and understanding the code before it is even submitted.

Contributor

This may have come out of the policy we're iterating on over in pip-tools, where I believe I was the one to suggest asking how tools were used.

My goal in such an inclusion is not that I particularly care what exact inputs were given to an LLM -- in fact, I really, really don't care, any more than I care what brand of pliers my electrician uses [1] -- but rather that I want to understand what the contributor is even trying to do.

The intent behind changes is significant. If it's not aligned with the actual changes in the diff -- e.g., an apparently unrelated test is modified -- it prompts us to ask why those extra changes were necessary.

So I stand by the ethos behind the suggestion: understanding contributor intent. But clearly, based on the responses here, we'll have to get at that information in a different way.

Footnotes

  1. Oooh! Knipex? Cool!

@gpshead

gpshead commented Feb 22, 2026

Require mandatory disclosure of AI tool usage in all pull requests

This is a non-starter and will fail. It is blatant discrimination.

Address PR review feedback: remove "mandatory disclosure", "public ban",
and "consequences" language. Replace hard disclosure requirement with
asking contributors to credit AI agents as co-authors via Co-authored-by
trailers. Rewrite Context section in a professional tone, removing
personal frustrations while keeping the core arguments about maintainer
time asymmetry and unsupervised agent risks.

Co-authored-by: Cursor AI <ai@cursor.sh>
Co-authored-by: Claude claude-4.6-opus-high-thinking <noreply@anthropic.com>
@RonnyPfannschmidt
Member Author

I want this to be merged as a merge commit so we keep the iteration of the wording, from something that started almost as a rage post born from frustration to the new language that reflects and takes into account the important and mindful feedback from people who care.

Thanks to everyone for calling out the language issues and barriers here; that's indispensable help to go from something that comes from raw frustration to something that can ship and sail.

@RonnyPfannschmidt RonnyPfannschmidt marked this pull request as ready for review February 23, 2026 09:38
@bluetech
Member

Personally, the main thing I would want from an AI policy is that the issues, PR descriptions, and comments (or, more generally, "communication") are written by humans. I simply do not want to talk with some machine. The code is actually sometimes decent and getting better. (I note that this very PR's description is in violation of this...)

As a bonus, I think that requiring human communication would much reduce the automated PRs (at least from honest users) since IMO their users are unlikely to create them if it requires any actual effort from them...

@nicoddemus
Member

I agree with @bluetech: the main thing is that we want to discuss/talk with a human, not a machine. I don't particularly care if the initial contribution was solely written by AI, but if I request further changes or ask questions, I want to talk with a human, not the LLM.

I recall one specific example where the interaction was like:

me: please change this test so it does A, B and C.
user: OK, changed it.
me: still breaking, please fix it.
user: updated the test, now it is passing! -- here it actually reverted the test to the first version, which I asked to change. Clearly it was not a person, but an AI.

This is of course frustrating and a waste of my time as reviewer/maintainer.

Comment on lines +14 to +16
**If you used AI agents** (e.g. Cursor Agent, Copilot Workspace, Claude Code, or similar) to generate code or commits, credit them as co-authors using `Co-authored-by` trailers in your commit messages. Purely agentic contributions are not accepted. See our [AI/LLM-Assisted Contributions Policy](https://github.com/pytest-dev/pytest/blob/main/CONTRIBUTING.rst#aillm-assisted-contributions-policy).

- [ ] Any AI agents used are credited as co-authors in commit messages.
Member

I personally don't think disclosing that an agent was used is particularly useful, and it is also very hard to police/enforce.

How about we instead focus on the main issue, that fully automatic contributions are not accepted?

Suggested change
**If you used AI agents** (e.g. Cursor Agent, Copilot Workspace, Claude Code, or similar) to generate code or commits, credit them as co-authors using `Co-authored-by` trailers in your commit messages. Purely agentic contributions are not accepted. See our [AI/LLM-Assisted Contributions Policy](https://github.com/pytest-dev/pytest/blob/main/CONTRIBUTING.rst#aillm-assisted-contributions-policy).
- [ ] Any AI agents used are credited as co-authors in commit messages.
> [!IMPORTANT]
> **Purely agentic contributions are not accepted**. See our [AI/LLM-Assisted Contributions Policy](https://github.com/pytest-dev/pytest/blob/main/CONTRIBUTING.rst#aillm-assisted-contributions-policy).

Perhaps focus on the "unsupervised" aspect:

Suggested change
**If you used AI agents** (e.g. Cursor Agent, Copilot Workspace, Claude Code, or similar) to generate code or commits, credit them as co-authors using `Co-authored-by` trailers in your commit messages. Purely agentic contributions are not accepted. See our [AI/LLM-Assisted Contributions Policy](https://github.com/pytest-dev/pytest/blob/main/CONTRIBUTING.rst#aillm-assisted-contributions-policy).
- [ ] Any AI agents used are credited as co-authors in commit messages.
> [!IMPORTANT]
> **Unsupervised agentic contributions are not accepted**. See our [AI/LLM-Assisted Contributions Policy](https://github.com/pytest-dev/pytest/blob/main/CONTRIBUTING.rst#aillm-assisted-contributions-policy).

CONTRIBUTING.rst Outdated
Comment on lines +194 to +206
**Credit AI agents as co-authors.** When AI tools act as agents generating
code or commits (e.g. Cursor Agent, Copilot Workspace, Claude Code, or similar),
credit them using ``Co-authored-by`` trailers in your commit messages. This applies
to agentic use, not to incidental use like autocomplete or search.

**Purely agentic contributions are not accepted.** Pull requests that are entirely
generated by AI agents, with no meaningful human review, understanding, or oversight,
will be closed. Every contribution must demonstrate that a human has reviewed,
understood, and taken responsibility for the changes.

**Respect maintainer time.** Submitting low-effort AI-generated pull requests that
waste reviewer time may result in a ban from the project. Our maintainers are
volunteers, and contributions should reflect genuine engagement with the project.
Member

@nicoddemus nicoddemus Feb 23, 2026

Suggestion 1: I would switch the order here, with the "Purely agentic contributions" section appearing first, because I believe this is the main problem we want to address.

Suggestion 2: As I mentioned earlier, I don't think we need to enforce crediting AI agents as co-authors, as this does not bring much value (IMHO) and is hard/impossible to enforce.


On that subject, one important thing is that using an agent as co-author cannot be an excuse for lack of understanding or poorly written code: it does not matter if an agent wrote the code; if you pushed it, you are responsible for it. You cannot later say "oh, I don't know why the code was written that way, it was not written by me, but by Claude". You need to take ownership of the code produced by the agent, as ultimately the code is your responsibility.

The point quoted above expresses this well:

Every contribution must demonstrate that a human has reviewed,
understood, and taken responsibility for the changes.

"taken responsibility" here is key.
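For readers unfamiliar with the mechanics discussed in this thread: `Co-authored-by` lines are plain Git commit-message trailers that GitHub parses to list additional authors on a commit. A minimal sketch of how an agent could be credited (the repository, commit message, and agent name below are hypothetical, not taken from this PR):

```shell
# Sketch: crediting an AI agent via a Co-authored-by trailer.
# Creates a throwaway repo so the example is self-contained.
set -e
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" -c user.name="Dev" -c user.email="dev@example.com" \
    commit -q --allow-empty \
    -m "fix: handle empty config in parser" \
    -m "Co-authored-by: Claude <noreply@anthropic.com>"
# The trailer lives in the commit body, which GitHub scans for co-authors.
trailer=$(git -C "$tmp" log -1 --format=%b)
echo "$trailer"
```

Nothing beyond the standard `Name <email>` trailer format in the final paragraph of the commit message is required for GitHub to pick it up.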

…and optional attribution

Shift from requiring disclosure of AI tool usage to emphasizing that
contributors own their submissions and must engage with reviewers
directly. Encourage (but do not require) AI tool attribution via
Co-authored-by trailers. PR template updated with a policy callout
and attribution checkbox.

Co-authored-by: Cursor AI <ai@cursor.sh>
Co-authored-by: Claude claude-4.6-opus-high-thinking <noreply@anthropic.com>
Member

@nicoddemus nicoddemus left a comment

Reviewed the latest changes, they LGTM. 👍

@webknjaz
Member

Stephen's PR with the final form of the initial policy is up FYI: https://pip-tools.rtfd.io/en/latest/contributing/#project-contribution-guidelines

@RonnyPfannschmidt RonnyPfannschmidt merged commit 3a1fe28 into pytest-dev:main Mar 4, 2026
33 checks passed
@RonnyPfannschmidt RonnyPfannschmidt deleted the contributions-agents branch March 4, 2026 09:13
9 participants